
    Calibration of a wide angle stereoscopic system

    Inaccuracies in the calibration of a stereoscopic system arise from errors in the point correspondences between the two images and from inexact point localization in each image. These errors increase if the stereoscopic system is composed of wide-angle lens cameras. We propose a technique in which the detected points in both images are corrected before the fundamental matrix and the lens distortion models are estimated. Since the points are corrected first, errors in point correspondences and point localization are avoided. To correct the point locations in both images, geometric and epipolar constraints are imposed in a nonlinear minimization problem. Geometric constraints define the localization of a point in relation to its neighbors in the same image, and epipolar constraints relate the location of a point to its corresponding point in the other image. © 2011 Optical Society of America.
    Ricolfe Viala, C.; Sánchez Salmerón, AJ.; Martínez Berti, E. (2011). Calibration of a wide angle stereoscopic system. Optics Letters, 36(16), 3064-3067. doi:10.1364/OL.36.003064
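
    As a rough illustration of the kind of correction described above, the sketch below nudges matched detections so that they better satisfy the epipolar constraint x2^T F x1 = 0 before the points are returned. It is a simplified, assumption-laden version of the idea: the geometric neighbor constraints and the lens distortion terms of the paper are omitted, and pts1/pts2 are assumed to be Nx2 float arrays of corresponding detections with N >= 8.

        # Minimal sketch (not the authors' exact formulation): nudge matched detections
        # so that they better satisfy the epipolar constraint x2^T F x1 = 0.
        # Geometric neighbour constraints and lens distortion are omitted here.
        import numpy as np
        import cv2
        from scipy.optimize import least_squares

        def epipolar_residuals(delta, pts1, pts2, F):
            # Residual of x2^T F x1 = 0 after shifting every point by its correction.
            d = delta.reshape(-1, 4)
            p1 = np.hstack([pts1 + d[:, :2], np.ones((len(pts1), 1))])   # homogeneous
            p2 = np.hstack([pts2 + d[:, 2:], np.ones((len(pts2), 1))])
            algebraic = np.sum(p2 * (F @ p1.T).T, axis=1)                # x2^T F x1 per match
            return np.concatenate([algebraic, 1e-2 * delta])             # keep shifts small

        def correct_points(pts1, pts2):
            # Initial fundamental matrix from the raw (uncorrected) matches.
            F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)
            sol = least_squares(epipolar_residuals, np.zeros(4 * len(pts1)),
                                args=(pts1, pts2, F))
            d = sol.x.reshape(-1, 4)
            return pts1 + d[:, :2], pts2 + d[:, 2:]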

    Improving Robot Perception Skills Using a Fast Image-Labelling Method with Minimal Human Intervention

    [EN] Featured Application: a natural interface that enhances human-robot interaction by improving robot perception skills. Robot perception skills contribute to natural interfaces that enhance human-robot interaction, and they can be notably improved by using convolutional neural networks. To train a convolutional neural network, the labelling process is the crucial first stage, in which image objects are marked with rectangles or masks. There are many image-labelling tools, but all require human interaction to achieve good results. Manual image labelling with rectangles or masks is labor-intensive, unappealing work that can take months to complete. This paper proposes a fast method to create labelled images with minimal human intervention, which is tested on a robot perception task. Images of objects taken against specific backgrounds are quickly and accurately labelled with rectangles or masks. In a second step, the detected objects can be synthesized with different backgrounds to improve the training capabilities of the image set. Experimental results show the effectiveness of this method with an example of human-robot interaction using hand fingers: the labelling method generates a database to train convolutional networks to detect hand fingers easily with minimal labelling work. The method can be applied to new image sets or used to add new samples to existing labelled image sets of any application. It improves the labelling process noticeably and reduces the time required to start training a convolutional neural network model.
    The Universitat Politecnica de Valencia financed the open access fees of this paper through project number 20200676 (Microinspeccion de superficies).
    Ricolfe Viala, C.; Blanes Campos, C. (2022). Improving Robot Perception Skills Using a Fast Image-Labelling Method with Minimal Human Intervention. Applied Sciences, 12(3), 1-14. https://doi.org/10.3390/app12031557
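
    A minimal sketch of the underlying idea, under the assumption that each object is photographed on a plain, uniform background: a mask and bounding box are derived automatically by thresholding against the background level, and the segmented object can then be pasted over a new background to enlarge the training set. This illustrates the approach, not the exact pipeline of the paper; the threshold and kernel size are assumptions.

        # Minimal sketch: automatic mask and bounding-box labelling for an object
        # photographed on a plain, uniform background, plus compositing onto a new
        # background for augmentation. Threshold and kernel size are assumptions.
        import cv2
        import numpy as np

        def label_on_plain_background(image_bgr, thresh=40):
            # Return a binary mask and bounding box (x, y, w, h) of the foreground object.
            gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
            bg_level = np.median(gray)                         # plain background dominates
            mask = (np.abs(gray.astype(int) - bg_level) > thresh).astype(np.uint8) * 255
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
            x, y, w, h = cv2.boundingRect(mask)                # box around non-zero pixels
            return mask, (x, y, w, h)

        def composite(object_bgr, mask, background_bgr):
            # Paste the masked object over a new background of the same size.
            return np.where((mask > 0)[..., None], object_bgr, background_bgr)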

    Caracterización y optimización del proceso de calibrado de cámaras basado en plantilla bidimensional

    Camera calibration ends up being a necessary step for obtaining 3D information about the environment from 2D images of it. Different techniques exist, based either on photogrammetry or on self-calibration. Photogrammetry-based methods capture an image of a known scene composed of a three-dimensional, two-dimensional or one-dimensional template. Self-calibration techniques obtain several images of the same scene and exploit its rigidity to establish constraints that allow the camera to be calibrated. Camera calibration yields the intrinsic and extrinsic parameters of the camera. Obtaining all the camera parameters by calibration is not exact, owing to inaccuracies that disturb the process. These inaccuracies arise from constructive imperfections of the lenses, mechanical misalignments of the lenses or the sensor, and also from processing the image and locating points within it. The results depend on the calibration template used, on the algorithm used to solve the problem, and on any pre-processing applied to the data. Since it is impossible to obtain an exact value for each camera parameter, it is of interest to obtain an interval instead. The uncertainties associated with the camera parameters make it possible to improve the 3D reconstruction and measurement procedures based on them. Moreover, when calibrating a camera, questions arise about which algorithm or template to use, the number of points to place on the template, the number of images to take of it, and the positions and orientations from which to take them. This thesis aims to answer all of these questions. First, the calibration method that obtains the best results is adopted, based on the methods…
    Ricolfe Viala, C. (2006). Caracterización y optimización del proceso de calibrado de cámaras basado en plantilla bidimensional [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1858
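
    For context, the sketch below shows a standard planar-template (chessboard) calibration pipeline of the kind this thesis characterizes, using OpenCV. The board geometry (9x6 inner corners, 25 mm squares) and the image folder are illustrative assumptions, not values from the thesis.

        # Minimal sketch of planar-template calibration with OpenCV. Board geometry
        # and the image folder are assumptions for illustration only.
        import glob
        import cv2
        import numpy as np

        pattern = (9, 6)                                     # inner corners per row/column
        square = 25.0                                        # square size in mm (assumed)
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

        obj_pts, img_pts, size = [], [], None
        for path in glob.glob("calib_images/*.png"):         # assumed image location
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)
                size = gray.shape[::-1]

        # Returns RMS reprojection error, intrinsic matrix K, distortion coefficients,
        # and one extrinsic pose (rvec, tvec) per image.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
        print("RMS reprojection error:", rms)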

    The Influence of Autofocus Lenses in the Camera Calibration Process

    [EN] Camera calibration is a crucial step in robotics and computer vision, and accurate camera parameters are necessary to achieve robust applications. Nowadays the camera calibration process consists of adjusting a set of data to a pin-hole model, assuming that if the reprojection error is close to zero the camera parameters are correct. Since all camera parameters are unknown, the computed results are considered true. However, the pin-hole model does not represent the camera behavior accurately if autofocus is considered. Real cameras with autofocus lenses change the focal length slightly to obtain sharp objects in the image, and this feature skews the calibration result if a unique pin-hole model with a constant focal length is computed. In this article, a deep analysis of the camera calibration process is carried out to detect and strengthen its weaknesses when autofocus lenses are used. To demonstrate that significant errors exist in the computed extrinsic parameters, the camera is mounted on a robot arm so that the true extrinsic camera parameters are known with an accuracy under 1 mm. It is also demonstrated that errors in the extrinsic camera parameters are compensated by bias in the intrinsic camera parameters. Since significant errors exist with autofocus lenses, a modification of the widely accepted camera calibration method using images of a planar template is presented: a pin-hole model with a distance-dependent focal length is proposed to improve the calibration process substantially.
    Ricolfe Viala, C.; Esparza Peidro, A. (2021). The Influence of Autofocus Lenses in the Camera Calibration Process. IEEE Transactions on Instrumentation and Measurement, 70, 1-15. https://doi.org/10.1109/TIM.2021.3055793
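
    Purely to illustrate the idea of a distance-dependent focal length, the sketch below fits a linear model f(d) = a + b*d to focal lengths obtained from separate calibrations at several known camera-to-template distances. Both the linear form and the numbers are assumptions; the article derives its own model.

        # Illustrative only: fit a linear focal-length-versus-distance model
        # f(d) = a + b*d. The linear form and the numbers are assumptions.
        import numpy as np

        distances = np.array([300.0, 500.0, 800.0, 1200.0])       # camera-template distance (mm)
        focals = np.array([1510.0, 1498.0, 1487.0, 1479.0])       # focal length (pixels), illustrative

        b, a = np.polyfit(distances, focals, 1)                   # least-squares line fit

        def focal_at(d):
            # Predicted focal length when the autofocus lens is focused at distance d.
            return a + b * d

        print(focal_at(650.0))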

    Calibration of a trinocular system formed with wide angle lens cameras

    To obtain 3D information of large areas, wide-angle lens cameras are used to keep the number of cameras as small as possible. However, since the images are highly distorted, errors in point correspondences increase and the 3D information can be erroneous. To increase the amount of data extracted from the images and to improve the 3D information, trinocular sensors are used. In this paper a calibration method for a trinocular sensor formed with wide-angle lens cameras is proposed. First, pixel locations in the images are corrected using a set of constraints which define the image formation in a trinocular system. Once the pixel locations have been corrected, the lens distortion and the trifocal tensor are computed.
    This work was partially funded by the Universidad Politecnica de Valencia research funds (PAID 2010-2431 and PAID 10017), Generalitat Valenciana (GV/2011/057), and by the Spanish government and the European Community under the projects DPI2010-20814-C02-02 (FEDER-CICYT) and DPI2010-20286 (CICYT).
    Ricolfe Viala, C.; Sánchez Salmerón, AJ.; Valera Fernández, Á. (2012). Calibration of a trinocular system formed with wide angle lens cameras. Optics Express, 20(25), 27691-27696. doi:10.1364/OE.20.027691
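
    For reference, the sketch below evaluates the point-point-point incidence relation of a trifocal tensor, which is the kind of trinocular constraint that can be imposed on corresponding pixels. The tensor values are placeholders and its estimation is not shown; this is a generic formulation, not the paper's algorithm.

        # Sketch of the trinocular point incidence relation that can be imposed on
        # corresponding pixels: [x2]_x (sum_i x1[i] * T[i]) [x3]_x = 0 (a 3x3 zero
        # matrix). The tensor values are placeholders; estimating T is not shown.
        import numpy as np

        def skew(v):
            # Cross-product (skew-symmetric) matrix of a 3-vector.
            return np.array([[0.0, -v[2], v[1]],
                             [v[2], 0.0, -v[0]],
                             [-v[1], v[0], 0.0]])

        def trifocal_residual(T, x1, x2, x3):
            # Frobenius norm of the point-point-point incidence relation.
            M = sum(x1[i] * T[i] for i in range(3))
            return np.linalg.norm(skew(x2) @ M @ skew(x3))

        T = np.zeros((3, 3, 3))                          # placeholder trifocal tensor
        x1 = np.array([100.0, 80.0, 1.0])                # homogeneous image points
        x2 = np.array([102.0, 79.0, 1.0])
        x3 = np.array([98.0, 81.0, 1.0])
        print(trifocal_residual(T, x1, x2, x3))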

    Depth-Dependent High Distortion Lens Calibration

    [EN] Accurate correction of highly distorted images is a very complex problem. Several lens distortion models exist, and they are adjusted using different techniques. Usually, regardless of the chosen model, a single distortion model is adjusted to undistort the images, and the distance between the camera and the calibration template is not considered. Several authors have reported the depth dependency of lens distortion, but none of them has treated it with highly distorted images. This paper presents an analysis of the depth dependency of distortion in strongly distorted images. The division model, which is able to represent high distortion with only one parameter, is modified to represent a depth-dependent high distortion lens model. The proposed calibration method obtains more accurate results than existing calibration methods.
    The Instituto de Automatica e Informatica Industrial (ai2) of the Universitat Politecnica de Valencia financed the open access fees of this paper.
    Ricolfe Viala, C.; Esparza Peidro, A. (2020). Depth-Dependent High Distortion Lens Calibration. Sensors, 20(13), 1-12. https://doi.org/10.3390/s20133695
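
    As a rough sketch of the model family discussed above, the code below applies the one-parameter division model and makes its parameter a function of depth. The linear depth dependence and the numeric values are illustrative assumptions, not the formulation derived in the article.

        # Sketch of the one-parameter division model with a depth-dependent parameter.
        # The dependence lambda(d) = l0 + l1*d and the values are assumptions.
        import numpy as np

        def undistort_division(pts, center, lam):
            # Division model: x_u = c + (x_d - c) / (1 + lam * r^2), pts is Nx2 in pixels.
            d = pts - center
            r2 = np.sum(d ** 2, axis=1, keepdims=True)
            return center + d / (1.0 + lam * r2)

        def depth_dependent_lambda(depth, l0=-1.0e-7, l1=2.0e-11):
            # Illustrative linear depth dependence of the division-model parameter.
            return l0 + l1 * depth

        pts = np.array([[820.0, 150.0], [400.0, 300.0]])
        center = np.array([640.0, 480.0])                # assumed distortion centre
        print(undistort_division(pts, center, depth_dependent_lambda(500.0)))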

    4-Dimensional deformation part model for pose estimation using Kalman filter constraints

    [EN] The goal of this research work is to improve the accuracy of human pose estimation using the deformation part model without increasing computational complexity. First, the proposed method seeks to improve pose estimation accuracy by adding the depth channel to the deformation part model, which was formerly defined only on RGB channels, to obtain a 4-dimensional deformation part model. In addition, computational complexity can be controlled by reducing the number of joints taken into account in a reduced 4-dimensional deformation part model. Finally, complete solutions are obtained by solving for the omitted joints with inverse kinematic models. The main goal of this article is to analyze the effect on pose estimation accuracy of adding a Kalman filter to the partial solutions of the 4-dimensional deformation part model. Experiments run on two data sets show that this method improves pose estimation accuracy compared with state-of-the-art methods and that the Kalman filter helps to increase this accuracy.
    The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: this work was partially financed by the Plan Nacional de I+D, Comision Interministerial de Ciencia y Tecnologia (FEDER-CICYT), under the project DPI2013-44227-R.
    Martínez Bertí, E.; Sánchez Salmerón, AJ.; Ricolfe Viala, C. (2017). 4-Dimensional deformation part model for pose estimation using Kalman filter constraints. International Journal of Advanced Robotic Systems, 14(3), 1-13. https://doi.org/10.1177/1729881417714230
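
    As an illustration of the filtering step, the sketch below runs a constant-velocity Kalman filter on the 3D position of a single joint estimated frame by frame. The frame rate, motion model and noise levels are illustrative assumptions, not the article's tuning.

        # Constant-velocity Kalman filter smoothing one joint's 3D position.
        import numpy as np

        dt = 1.0 / 30.0                                   # assumed frame period
        F = np.eye(6); F[:3, 3:] = dt * np.eye(3)         # state: [x, y, z, vx, vy, vz]
        H = np.hstack([np.eye(3), np.zeros((3, 3))])      # only position is measured
        Q = 1e-3 * np.eye(6)                              # process noise (assumed)
        R = 1e-2 * np.eye(3)                              # measurement noise (assumed)

        def kalman_step(x, P, z):
            # One predict/update cycle for a new joint measurement z (3-vector).
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(6) - K @ H) @ P_pred
            return x_new, P_new

        x, P = np.zeros(6), np.eye(6)                     # initial state and covariance
        x, P = kalman_step(x, P, np.array([0.10, 0.52, 1.30]))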

    Efficient lens distortion correction for decoupling in calibration of wide angle lens cameras

    In photogrammetry applications, camera parameters must be as accurate as possible to avoid deviations in measurements taken from images. Errors increase if wide-angle lens cameras are used. Moreover, the coupling between the intrinsic and extrinsic camera parameters and the lens distortion model influences the result of the calibration process notably. This paper proposes a method for calibrating wide-angle lens cameras which takes this strong coupling into account. The proposed method obtains stable results that do not depend on how the image lens distortion is corrected.
    This work was supported in part by the Universidad Politecnica de Valencia research funds (PAID 2010-2431 and PAID 10017), the Generalitat Valenciana (GV/2011/057), and the Spanish government and the European Community under projects DPI2010-20814-C02-02 (FEDER-CICYT) and DPI2010-20286 (CICYT).
    Ricolfe Viala, C.; Sánchez Salmerón, AJ.; Valera Fernández, Á. (2013). Efficient lens distortion correction for decoupling in calibration of wide angle lens cameras. IEEE Sensors Journal, 13(2), 854-863. https://doi.org/10.1109/JSEN.2012.2229704
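
    To illustrate the decoupling idea, the sketch below estimates a one-parameter division-model distortion coefficient only from the requirement that points on imaged straight template lines remain collinear after correction, so distortion is fixed before any pin-hole parameters are computed. The function names, bounds and optimizer are assumptions, not the article's algorithm.

        # Fit a division-model coefficient from line straightness alone,
        # independently of the intrinsic and extrinsic parameters.
        import numpy as np
        from scipy.optimize import minimize_scalar

        def undistort_division(pts, center, lam):
            d = pts - center
            r2 = np.sum(d ** 2, axis=1, keepdims=True)
            return center + d / (1.0 + lam * r2)

        def straightness_error(lam, lines, center):
            # Sum over lines of the deviation of the corrected points from collinearity.
            err = 0.0
            for pts in lines:                              # each entry: Nx2 points of one line
                u = undistort_division(pts, center, lam)
                u_centered = u - u.mean(axis=0)
                err += np.linalg.svd(u_centered, compute_uv=False)[-1]
            return err

        def fit_distortion(lines, center):
            res = minimize_scalar(straightness_error, bounds=(-1e-6, 1e-6),
                                  args=(lines, center), method="bounded")
            return res.x                                   # distortion parameter lambda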

    Fall detection based on the gravity vector using a wide-angle camera

    Falls among elderly people are becoming an increasing healthcare problem, since life expectancy and the number of elderly people who live alone have grown over recent decades. If fall detection systems could be installed easily and economically in homes, telecare could be provided to alleviate this problem. In this paper we propose a low-cost fall detection system based on a single wide-angle camera. Wide-angle cameras are used to reduce the number of cameras required for monitoring large areas. Using a calibrated video system, two new features based on the gravity vector are introduced for fall detection: the angle between the gravity vector and the line from the feet to the head of the person, and the size of the upper body. Additionally, to differentiate between fall events and controlled lying-down events, the speed of change of these features is also measured. Our experiments demonstrate that the system is 97% accurate for fall detection. (C) 2014 Elsevier Ltd. All rights reserved.
    This work was partially financed by the Programa Estatal de Investigacion, Desarrollo e Innovacion Orientada a los Retos de la Sociedad (Direccion General de Investigacion Cientifica y Tecnica, Ministerio de Economia y Competitividad) under the project DPI2013-44227-R.
    Bosch Jorge, M.; Sánchez Salmerón, AJ.; Valera Fernández, Á.; Ricolfe Viala, C. (2014). Fall detection based on the gravity vector using a wide-angle camera. Expert Systems with Applications, 41(17), 7980-7986. https://doi.org/10.1016/j.eswa.2014.06.045
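
    As a concrete illustration of the two features, the sketch below computes the angle between the gravity vector and the feet-to-head line, together with an apparent upper-body size, from assumed 3D landmark positions. Thresholds and the speed-of-change test used to distinguish falls from controlled lying-down events are not shown.

        # Gravity-vector features from assumed 3D landmark positions.
        import numpy as np

        GRAVITY = np.array([0.0, 0.0, -1.0])              # assumed world gravity direction

        def fall_features(head, feet, shoulder, hip):
            # Return (angle in degrees between gravity and the feet-to-head line, upper-body size).
            body = head - feet
            cosang = np.dot(body, -GRAVITY) / (np.linalg.norm(body) * np.linalg.norm(GRAVITY))
            angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
            return angle, np.linalg.norm(shoulder - hip)

        # Standing person: the feet-to-head line roughly opposes gravity, so the angle is small.
        print(fall_features(np.array([0.0, 0.0, 1.7]), np.array([0.0, 0.0, 0.0]),
                            np.array([0.0, 0.0, 1.4]), np.array([0.0, 0.0, 1.0])))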

    A Database for Learning Numbers by Visual Finger Recognition in Developmental Neuro-Robotics

    Numerical cognition is a fundamental component of human intelligence that is not yet fully understood. Indeed, it is a subject of research in many disciplines, e.g., neuroscience, education, cognitive and developmental psychology, philosophy of mathematics, and linguistics. In Artificial Intelligence, aspects of numerical cognition have been modelled through neural networks to replicate and analytically study children's behaviours. However, artificial models need to incorporate realistic sensory-motor information from the body to fully mimic children's learning behaviours, e.g., the use of fingers to learn and manipulate numbers. To this end, this article presents a database of images focused on number representation with fingers, using both human and robot hands, which can constitute the basis for building new realistic models of numerical cognition in humanoid robots and enable a grounded learning approach in developmental autonomous agents. The article provides a benchmark analysis of the datasets in the database, which are used to train, validate, and test five state-of-the-art deep neural networks; the networks are compared for classification accuracy, together with an analysis of the computational requirements of each one. The discussion highlights the trade-off between speed and precision in detection, which is required for realistic applications in robotics.
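
    As a sketch of the kind of benchmark described, the code below measures top-1 accuracy and mean inference time per batch for candidate classifiers. The torchvision models, the class count and the test_loader are placeholders, not the five networks or the data used in the article.

        # Compare classification accuracy and inference time of candidate models.
        import time
        import torch
        import torchvision.models as models

        def benchmark(model, loader, device="cpu"):
            # Return (top-1 accuracy, mean inference time per batch in seconds).
            model.eval().to(device)
            correct, total, elapsed, batches = 0, 0, 0.0, 0
            with torch.no_grad():
                for images, labels in loader:
                    start = time.perf_counter()
                    logits = model(images.to(device))
                    elapsed += time.perf_counter() - start
                    batches += 1
                    correct += (logits.argmax(dim=1).cpu() == labels).sum().item()
                    total += labels.numel()
            return correct / total, elapsed / max(batches, 1)

        candidates = {"resnet18": models.resnet18(num_classes=11),       # placeholder class count
                      "mobilenet_v2": models.mobilenet_v2(num_classes=11)}
        # for name, net in candidates.items():
        #     acc, t = benchmark(net, test_loader)         # test_loader: assumed DataLoader
        #     print(name, acc, t)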